Comparing Rater Groups: How To Disentangle Rating Reliability From Construct-Level Disagreements


Related articles

Comment on "Inter-rater reliability of delirium rating scales".

…lation and found a … of 0.91, which is one of the many languages that this instrument is now translated into, allowing international use in monitoring. Secondly, we have conducted a large-scale implementation study of the CAM-ICU. This year-long quality assurance/quality improvement project included 55 nurses, 711 patients, and two different medical centers [5]. Data were recorded prospectively a…


How to assess the reliability of cerebral microbleed rating?

Interest in cerebral microbleeds has grown rapidly over the past years. The need for sensitive and reliable detection of microbleeds has spurred the development of new MR sequences and standardized visual rating scales (Cordonnier et al., 2009; Gregoire et al., 2009). The value of these rating scales is currently assessed by measuring the inter-rater agreement, which is commonly determined usin...

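The snippet above is cut off before naming the agreement statistic. As a purely illustrative aside, a chance-corrected coefficient such as Cohen's kappa is a common choice for two raters; the sketch below computes it for made-up categorical ratings (the data, labels, and function name are assumptions for illustration, not from the cited study).

```python
# Minimal sketch: Cohen's kappa for two raters on categorical ratings.
# All data below are invented for illustration; in a microbleed-rating
# study each entry would be a rater's judgement for one scan or region.
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Chance-corrected agreement between two raters."""
    n = len(ratings_a)
    categories = set(ratings_a) | set(ratings_b)

    # Observed agreement: proportion of items where the two raters match.
    p_observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n

    # Expected agreement if the raters labelled items independently.
    freq_a, freq_b = Counter(ratings_a), Counter(ratings_b)
    p_expected = sum((freq_a[c] / n) * (freq_b[c] / n) for c in categories)

    return (p_observed - p_expected) / (1 - p_expected)

if __name__ == "__main__":
    rater_1 = ["present", "absent", "absent", "present", "present", "absent"]
    rater_2 = ["present", "absent", "present", "present", "absent", "absent"]
    print(f"kappa = {cohens_kappa(rater_1, rater_2):.2f}")  # kappa = 0.33
```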

Inter-Intra Rater Reliability, Construct and Discriminative Validity of Iranian Typically Developing Children Handwriting Speed Test (I-CHST)

Objectives: The purpose of this study was to develop an Iranian Handwriting Speed Test (I-CHST) for testing Iranian students aged 8-12. To date, no handwriting speed norms have been published for Iranian students. Methods: A sample of 400 typically developing Iranian students across four age cohorts was recruited. Among those 400 students, 50% were girls…


Improving Performance by Re-Rating in the Dynamic Estimation of Rater Reliability

Nowadays crowdsourcing is widely used in supervised machine learning to facilitate the collection of ratings for unlabelled training sets. To get good-quality results, it is worth rejecting results from noisy or unreliable raters as soon as they are discovered. Many techniques for filtering unreliable raters rely on the presentation of training instances to the raters identified as most a…

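As a rough sketch of the idea of estimating rater reliability dynamically, the example below scores each rater against a majority-vote consensus and keeps only raters above a threshold. This is a simple illustrative variant with invented data and an invented threshold, not the re-rating procedure proposed in the cited paper.

```python
# Illustrative sketch: per-rater reliability from agreement with a
# majority-vote consensus, then filtering of low-reliability raters.
# Item names, rater ids, labels, and the 0.6 threshold are all made up.
from collections import Counter, defaultdict

def consensus_labels(ratings):
    """Majority vote per item; ratings maps item -> {rater: label}."""
    return {item: Counter(labels.values()).most_common(1)[0][0]
            for item, labels in ratings.items()}

def rater_reliability(ratings):
    """Fraction of each rater's labels that match the consensus."""
    consensus = consensus_labels(ratings)
    hits, totals = defaultdict(int), defaultdict(int)
    for item, labels in ratings.items():
        for rater, label in labels.items():
            totals[rater] += 1
            hits[rater] += int(label == consensus[item])
    return {rater: hits[rater] / totals[rater] for rater in totals}

if __name__ == "__main__":
    ratings = {
        "item1": {"r1": "pos", "r2": "pos", "r3": "neg"},
        "item2": {"r1": "neg", "r2": "neg", "r3": "neg"},
        "item3": {"r1": "pos", "r2": "neg", "r3": "neg"},
    }
    scores = rater_reliability(ratings)
    kept = {r for r, s in scores.items() if s >= 0.6}
    print(scores, "-> keep:", kept)
```

In a dynamic setting the consensus and reliability scores would be re-estimated as new ratings arrive, so a rater's standing can change over time.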


Journal

Journal title: Industrial and Organizational Psychology

Year: 2016

ISSN: 1754-9426, 1754-9434

DOI: 10.1017/iop.2016.88